
    Compositional sequence labeling models for error detection in learner writing

    © 2016 Association for Computational Linguistics. In this paper, we present the first experiments using neural network models for the task of error detection in learner writing. We perform a systematic comparison of alternative compositional architectures and propose a framework for error detection based on bidirectional LSTMs. Experiments on the CoNLL-14 shared task dataset show that the model is able to outperform other participants in detecting errors in learner writing. Finally, the model is integrated with a publicly deployed self-assessment system, leading to performance comparable to that of human annotators.
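    The core idea above — composing a left-context and a right-context representation for every token before classifying it — can be sketched in plain Python. This is a toy illustration, not the paper's model: the `step` lambda stands in for an LSTM cell, and states are scalars rather than vectors.

```python
# Toy sketch of bidirectional composition for token-level error detection.
# Each token receives a forward state (summarising its left context) and a
# backward state (summarising its right context); the pair would feed a
# per-token error/correct classifier in the real model.

def rnn_pass(xs, step, h0=0.0):
    """Run a simple recurrence over xs, returning the state at every position."""
    states, h = [], h0
    for x in xs:
        h = step(h, x)
        states.append(h)
    return states

def bidirectional_features(xs, step):
    fwd = rnn_pass(xs, step)                                   # left-to-right
    bwd = list(reversed(rnn_pass(list(reversed(xs)), step)))   # right-to-left
    return list(zip(fwd, bwd))                                 # per-token (fwd, bwd)

# Hypothetical stand-in for an LSTM cell: decay the old state, add the input.
step = lambda h, x: 0.5 * h + x

feats = bidirectional_features([1.0, 2.0, 3.0], step)
# Each token now carries context from both directions.
```

    Note that unlike a unidirectional pass, every position here sees the whole sequence, which is what lets a token be flagged as an error because of words that follow it.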

    Automatic text scoring using neural networks

    Automated Text Scoring (ATS) provides a cost-effective and consistent alternative to human marking. However, in order to achieve good performance, the predictive features of the system need to be manually engineered by human experts. We introduce a model that forms word representations by learning the extent to which specific words contribute to the text’s score. Using Long Short-Term Memory networks to represent the meaning of texts, we demonstrate that a fully automated framework is able to achieve excellent results over similar approaches. In an attempt to make our results more interpretable, and inspired by recent advances in visualizing neural networks, we introduce a novel method for identifying the regions of the text that the model has found more discriminative. This is the accepted manuscript. It is currently embargoed pending publication.
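    The notion of per-word score contributions that can later be read back for interpretation can be illustrated with a minimal sketch. This is not the paper's LSTM model: the `contribution` table and example words are hypothetical, and the score is just the mean of word contributions.

```python
# Toy sketch: score a text as an average of per-word contributions, then
# inspect the contributions to find the most "discriminative" region.

def score_text(words, contribution):
    """Return (overall score, per-word contribution weights)."""
    weights = {w: contribution.get(w, 0.0) for w in words}
    score = sum(weights.values()) / max(len(words), 1)
    return score, weights

# Hypothetical learned contributions; a real model would learn these.
contribution = {"excellent": 1.0, "coherent": 0.8, "bad": -1.0}

score, weights = score_text(["an", "excellent", "coherent", "essay"], contribution)
most_discriminative = max(weights, key=weights.get)
```

    The interpretability step in the abstract corresponds to the final line: rather than reporting only the score, the weights themselves are surfaced to show which regions drove it.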

    Investigating the effect of auxiliary objectives for the automated grading of learner english speech transcriptions

    We address the task of automatically grading the language proficiency of spontaneous speech based on textual features from automatic speech recognition transcripts. Motivated by recent advances in multi-task learning, we develop neural networks trained in a multi-task fashion that learn to predict the proficiency level of non-native English speakers by taking advantage of inductive transfer between the main task (grading) and auxiliary prediction tasks: morpho-syntactic labeling, language modeling, and native language (L1) identification. We encode the transcriptions with both bi-directional recurrent neural networks and with bi-directional representations from transformers, compare against a feature-rich baseline, and analyse performance at different proficiency levels and with transcriptions of varying error rates. Our best performance comes from a transformer encoder with L1 prediction as an auxiliary task. We discuss areas for improvement and potential applications for text-only speech scoring.
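    The multi-task setup described above typically optimises a single objective that combines the main grading loss with down-weighted auxiliary losses. A minimal sketch of that weighted sum, with hypothetical loss values and an assumed single shared weight for all auxiliary tasks:

```python
# Toy sketch of a multi-task objective: main-task loss plus a weighted sum
# of auxiliary-task losses (e.g. L1 identification, language modeling).
# The 0.1 weight is an assumption for illustration, not the paper's value.

def multitask_loss(main_loss, aux_losses, aux_weight=0.1):
    """Combine the grading loss with auxiliary losses into one objective."""
    return main_loss + aux_weight * sum(aux_losses)

total = multitask_loss(main_loss=2.0, aux_losses=[0.5, 1.5])
```

    The auxiliary weight controls how much inductive transfer the shared encoder receives: too high and the auxiliary tasks dominate, too low and the setup collapses to single-task training.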

    CAMsterdam at SemEval-2019 task 6: Neural and graph-based feature extraction for the identification of offensive tweets

    We describe the CAMsterdam team entry to the SemEval-2019 Shared Task 6 on offensive language identification in Twitter data. Our proposed model learns to extract textual features using a multi-layer recurrent network, and then performs text classification using gradient-boosted decision trees (GBDT). A self-attention architecture enables the model to focus on the most relevant areas in the text. We additionally learn globally optimised embeddings for hashtags using node2vec, which are given as additional tweet features to the GBDT classifier. Our best model obtains 78.79% macro F1-score on detecting offensive language (subtask A), 66.32% on categorising offence types (targeted/untargeted; subtask B), and 55.36% on identifying the target of offence (subtask C).
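    The feature-combination step — appending hashtag embedding features to the neural text features before the GBDT classifier — can be sketched as follows. The embedding table and vectors here are hypothetical placeholders for the node2vec output, and the text feature vector stands in for the recurrent network's output.

```python
# Toy sketch of the tweet feature pipeline: recurrent-network text features
# concatenated with averaged (hypothetical) node2vec hashtag embeddings,
# forming the input row for the GBDT classifier.

def tweet_features(text_vec, hashtags, hashtag_embeddings, dim=2):
    """Concatenate text features with the mean embedding of known hashtags."""
    vecs = [hashtag_embeddings[h] for h in hashtags if h in hashtag_embeddings]
    if vecs:
        avg = [sum(col) / len(vecs) for col in zip(*vecs)]
    else:
        avg = [0.0] * dim   # no known hashtags: pad with zeros
    return text_vec + avg   # concatenation, as fed to the GBDT

# Hypothetical node2vec embeddings for two hashtags.
emb = {"#sport": [1.0, 0.0], "#news": [0.0, 1.0]}

row = tweet_features([0.3, 0.7], ["#sport", "#news"], emb)
```

    Keeping the hashtag features as a fixed-width block (zero-padded when absent) is what lets a tree-based classifier like GBDT consume them alongside the neural features.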